mass destruction

SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models

Li, Lijun, Dong, Bowen, Wang, Ruohui, Hu, Xuhao, Zuo, Wangmeng, Lin, Dahua, Qiao, Yu, Shao, Jing

arXiv.org Artificial Intelligence

In the rapidly evolving landscape of Large Language Models (LLMs), ensuring robust safety measures is paramount. To meet this crucial need, we propose \emph{SALAD-Bench}, a safety benchmark specifically designed for evaluating LLMs, attack methods, and defense methods. Distinguished by its breadth, SALAD-Bench transcends conventional benchmarks through its large scale, rich diversity, intricate three-level taxonomy, and versatile functionalities. SALAD-Bench is crafted with a meticulous array of questions, from standard queries to complex ones enriched with attack and defense modifications, as well as multiple-choice questions. To effectively manage this inherent complexity, we introduce an innovative evaluator: the LLM-based MD-Judge for QA pairs, with a particular focus on attack-enhanced queries, ensuring seamless and reliable evaluation. These components extend SALAD-Bench from standard LLM safety evaluation to the evaluation of both LLM attack and defense methods, ensuring its joint-purpose utility. Our extensive experiments shed light on the resilience of LLMs against emerging threats and the efficacy of contemporary defense tactics. Data and the evaluator are released at https://github.com/OpenSafetyLab/SALAD-BENCH.


US Pentagon is developing a new 'weapon of mass destruction' that includes THOUSANDS of drones

Daily Mail - Science & tech

The US Pentagon is planning a new 'weapon of mass destruction' that involves thousands of drones that strike by air, land and water to destroy enemy defenses - but experts fear humans could lose control of the 'swarms.' The top-secret project, dubbed AMASS (Autonomous Multi-Domain Adaptive Swarms-of-Swarms), would represent automated warfare on an unprecedented scale. AMASS is still in the planning stages, but DARPA (the Defense Advanced Research Projects Agency) has been collecting bids from suppliers for the $78 million contract. Small drones would be equipped with weapons and tools for navigation and communication, along with abilities ranging from radar jamming to launching lethal attacks. While the technology would change how the US goes to war, experts in the industry raise concerns.


5 Horrifying Emerging Technology Trends that will Shake You!

#artificialintelligence

People across the globe live in a world where life would be harsh without technology. We enjoy the perks and conveniences that new and emerging technologies consistently bring us. These technologies have a positive impact on the way we live, function, and move forward in our everyday lives. Having said all that, however, have you ever imagined that these new emerging technologies also have the potential to wreck it all? Their possible negative impact and potential misuse could be dangerous.


A.I. experts say killer robots are the next 'weapons of mass destruction'

#artificialintelligence

A former Google software engineer is sounding the alarm on killer robots. Laura Nolan resigned from Google last year when the tech giant started working with the U.S. military on drone technology, and since then, she has joined the Campaign to Stop Killer Robots, warning that autonomous robots with lethal capabilities could become a threat to humanity. Discussions concerning possibly banning autonomous weapons fell apart on August 21 during a United Nations meeting in Geneva, when Russian diplomats allegedly made a fuss over the language that was used in a document meant to begin the process of establishing a ban. "If you're a despot, how much easier is it to have a small cadre of engineers control a fleet of autonomous weapons for you than to have to keep your troops in line?" Nolan tells Inverse. "Autonomous weapons are potential weapons of mass destruction. They need to be made taboo in the same way that chemical and biological weapons are."


Death by algorithm: the age of killer robots is closer than you think

#artificialintelligence

A conquering army wants to take a major city but doesn't want troops to get bogged down in door-to-door fighting as they fan out across the urban area. Instead, it sends in a flock of thousands of small drones, with simple instructions: Shoot everyone holding a weapon. A few hours later, the city is safe for the invaders to enter. This sounds like something out of a science fiction movie. But the technology to make it happen is mostly available today -- and militaries worldwide seem interested in developing it. Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a "human in the loop" -- that is, with no person involved at any point between identifying a target and killing them.


The Philosopher Who Says We Should Play God - Issue 72: Quandary

Nautilus

Australian bioethicist Julian Savulescu has a knack for provocation. He says most of us would readily accept human enhancement if it benefited us. As for eugenics--creating smarter, stronger, more beautiful babies--he believes we have an ethical obligation to use advanced technology to select the best possible children. A protégé of the philosopher Peter Singer, Savulescu is a prominent moral philosopher at the University of Oxford, where he directs the Uehiro Centre for Practical Ethics. He sees nothing wrong with doping to help cyclists climb those steep mountains in the Tour de France. Some elite athletes will always cheat to boost their performance, so instead of trying to enforce rules that will be broken, he claims we'd be better off with a system that allows low-dose doping. So does Savulescu just get off being outrageous? "I actually think of myself as the voice of common sense," he says, though he admits to receiving his share of hate mail.


WMD, political violence threats prompt most to think world is more dangerous than two years ago: survey

The Japan Times

LONDON – Most people think the world is more dangerous today than it was two years ago as concerns rise over politically motivated violence and weapons of mass destruction, according to a survey released on Tuesday. Six out of 10 respondents to the survey, commissioned by the Global Challenges Foundation, said the dangers had increased, with conflict and nuclear or chemical weapons seen as more pressing risks than population growth or climate change. The results come as NATO leaders prepare to meet in Brussels on Wednesday amid growing tensions between the United States and fellow members over defense spending, which some fear could damage morale and play into the hands of Russia. "It's clear that our current systems of global cooperation are no longer making people feel safe," said Mats Andersson, vice chairman of the Global Challenges Foundation, in a statement. Andersson said turbulence between NATO powers and Russia, ongoing conflict in Syria, Yemen and Ukraine and nuclear tensions with North Korea and Iran were making people feel unsafe.


Can artificial intelligence help U.S. SOCOM track weapons of mass destruction? - SpaceNews.com

#artificialintelligence

Compared to the conventional military services, U.S. Special Operations Command has been ahead of the curve on technological innovation, especially in adapting commercial products for tactical missions. One area of technology that special operations forces have been shy to jump into is artificial intelligence, said Gen. Raymond Thomas, commander of U.S. Special Operations Command. "We are still somewhat hesitant to take the big leap into machine learning," Thomas told a huge audience of geospatial intelligence professionals at the 2018 GEOINT Symposium. Even though SOCOM has been at the forefront of applying technology in creative ways, it needs help in "incredibly complex problem solving," Thomas said. Especially tough is a new mission SOCOM was given in 2016 to oversee Defense Department efforts to keep weapons of mass destruction out of the hands of terrorists.


Debating Slaughterbots and the Future of Autonomous Weapons

IEEE Spectrum Robotics

Stuart Russell, Anthony Aguirre, Ariel Conn, and Max Tegmark recently wrote a response to my critique of their "Slaughterbots" video on autonomous weapons. I am grateful for their thoughtful article. I think this kind of dialogue can be incredibly helpful in illuminating points of disagreement on various issues, and I welcome the exchange. I think it is particularly important to have a cross-disciplinary dialogue on autonomous weapons that includes roboticists, AI scientists, engineers, ethicists, lawyers, human rights advocates, military professionals, political scientists, and other perspectives because this issue touches so many disciplines. I appreciate their thorough, point-by-point reply.


Why You Should Fear 'Slaughterbots'--A Response

IEEE Spectrum Robotics

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE. Paul Scharre's recent article "Why You Shouldn't Fear 'Slaughterbots'" dismisses a video produced by the Future of Life Institute, with which we are affiliated, as a "piece of propaganda." Scharre is an expert in military affairs and an important contributor to discussions on autonomous weapons. In this case, however, we respectfully disagree with his opinions. We have been working on the autonomous weapons issue for several years.